The Sixth Lecture on Algorithmic Randomness

Authors

  • Rod Downey
  • Peter Cholak
Abstract

This paper follows on from the author’s Five Lectures on Algorithmic Randomness. It is concerned with material not found in that long paper, concentrating on Martin-Löf lowness and triviality. We present a hopefully user-friendly account of the decanter method, and discuss recent results of the author with Peter Cholak and Noam Greenberg concerning the class of strongly jump traceable reals introduced by Figueira, Nies and Stephan.

Similar resources

Algorithmic Information Theory and Kolmogorov Complexity

This document contains lecture notes of an introductory course on Kolmogorov complexity. They cover basic notions of algorithmic information theory: Kolmogorov complexity (plain, conditional, prefix), notion of randomness (Martin-Löf randomness, Mises–Church randomness), Solomonoff universal a priori probability and their properties (symmetry of information, connection between a priori probabil...

Lecture notes on descriptional complexity and randomness

A didactical survey of the foundations of Algorithmic Information Theory. These notes are short on motivation, history and background but introduce some of the main techniques and concepts of the field. The “manuscript” has been evolving over the years. Please, look at “Version history” below to see what has changed when.

Lecture Notes on Randomness for Continuous Measures

Most studies on algorithmic randomness focus on reals random with respect to the uniform distribution, i.e. the (1/2, 1/2)-Bernoulli measure, which is measure theoretically isomorphic to Lebesgue measure on the unit interval. The theory of uniform randomness, with all its ramifications (e.g. computable or Schnorr randomness) has been well studied over the past decades and has led to an impressi...

CS 369N: Beyond Worst-Case Analysis Lecture

This lecture is the last on flexible and robust models of “non-worst-case data”. The idea is again to assume that there is some “random aspect” to the data, while stopping well short of average-case analysis. Recall our critique of the latter: it encourages overfitting a brittle algorithmic solution to an overly specific data model. Thus far, we’ve seen two data models that assume only that there i...

CS168: The Modern Algorithmic Toolbox Lecture #6: Markov Chain Monte Carlo

The previous lecture covered several tools for inferring properties of the distribution that underlies a random sample. In this lecture we will see how to design distributions and sampling schemes that will allow us to solve problems we care about. In some instances, the goal will be to understand an existing random process, and in other instances, the problem we hope to solve has no intrinsic ...


Journal title:

Volume   Issue

Pages   -

Publication date: 2007